robustness verification
Overcoming the Convex Barrier for Simplex Inputs: Supplementary Material
Equivalently, if we demonstrate that any linear function obtains the same maximum value over S as over CH, the proof is complete. Figure 1 in our main paper compares the tightness of different relaxations. To use the Anderson relaxation, we need to replace the input simplex with the unit hypercube. The relaxation first uses the Kuhn triangulation of [0,1]^n [Todd, 1976], which describes the collection of simplices whose union is [0,1]^n. Then constraints r1 and r2 are constructed using these two triangles. In contrast, our relaxation works directly with the input simplex and only requires one upper constraint.
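As a quick sanity check on the vertex argument above: a linear function c·x over the probability simplex attains its maximum at a vertex, so the maximum is simply max(c). A minimal numerical illustration (the function name `max_over_simplex_vertices` is hypothetical, not from the paper):

```python
import random

def max_over_simplex_vertices(c):
    # A linear function c.x over the probability simplex attains its
    # maximum at a vertex e_i, so the maximum is simply max(c).
    return max(c)

# Empirical check: random points in the simplex never exceed the vertex max.
random.seed(0)
c = [0.3, -1.2, 2.5, 0.7]
vertex_max = max_over_simplex_vertices(c)
for _ in range(1000):
    w = [random.random() + 1e-12 for _ in c]
    s = sum(w)
    x = [wi / s for wi in w]                      # a random simplex point
    assert sum(ci * xi for ci, xi in zip(c, x)) <= vertex_max + 1e-9
```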
Robustness Verification of Tree-based Models
We study the robustness verification problem of tree-based models, including random forests (RF) and gradient boosted decision trees (GBDT). Formal robustness verification of decision tree ensembles involves finding the exact minimal adversarial perturbation or a guaranteed lower bound on it. Existing approaches cast this verification problem as a mixed integer linear programming (MILP) problem, which finds the minimal adversarial distortion in exponential time and is thus impractical for large ensembles. Although this verification problem is NP-complete in general, we give a more precise complexity characterization. We show that there is a simple linear-time algorithm for verifying a single tree, and that for tree ensembles the verification problem can be cast as a max-clique problem on a multi-partite boxicity graph. For low-dimensional problems where the boxicity can be viewed as constant, this reformulation leads to a polynomial-time algorithm. For general problems, by exploiting the boxicity of the graph, we devise an efficient verification algorithm that gives tight lower bounds on the robustness of decision tree ensembles and allows iterative improvement and any-time termination. On RF/GBDT models trained on a variety of datasets, we significantly outperform the lower bounds obtained by relaxing the MILP formulation into a linear program (LP), and are hundreds of times faster than solving MILPs to get the exact minimal adversarial distortion. Our proposed method is capable of giving tight robustness verification bounds on large GBDTs with hundreds of deep trees.
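The single-tree case mentioned in the abstract admits a simple linear-time check: traverse the tree once, tracking the feasible feature box along each path, and collect the labels of leaves reachable within the l-inf ε-ball. The sketch below is a plausible reconstruction under that reading, not the paper's exact algorithm; the tuple-based tree representation and function names are hypothetical:

```python
def reachable_labels(node, lo, hi, out):
    """Collect labels of all leaves reachable when feature i may take any
    value in [lo[i], hi[i]]. Nodes are ("split", feat, thresh, left, right)
    or ("leaf", label) -- a hypothetical representation."""
    if node[0] == "leaf":
        out.add(node[1])
        return
    _, f, t, left, right = node
    if lo[f] <= t:                        # left branch (x[f] <= t) feasible
        old, hi[f] = hi[f], min(hi[f], t)
        reachable_labels(left, lo, hi, out)
        hi[f] = old                       # restore the box after recursion
    if hi[f] > t:                         # right branch (x[f] > t) feasible
        old, lo[f] = lo[f], max(lo[f], t)
        reachable_labels(right, lo, hi, out)
        lo[f] = old

def tree_is_robust(root, x, eps, y):
    # Robust iff every leaf reachable within the l-inf eps-ball keeps label y.
    out = set()
    reachable_labels(root, [xi - eps for xi in x], [xi + eps for xi in x], out)
    return out == {y}
```

Each node is visited at most once, so the check runs in time linear in the tree size.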
Robustness Verification of Graph Neural Networks Via Lightweight Satisfiability Testing
Lu, Chia-Hsuan, Tan, Tony, Benedikt, Michael
Graph neural networks (GNNs) are the predominant architecture for learning over graphs. As with any machine learning model, an important issue is the detection of adversarial attacks, where an adversary can change the output with a small perturbation of the input. Techniques for solving the adversarial robustness problem - determining whether such an attack exists - were originally developed for image classification, but there are variants for many other machine learning architectures. In the case of graph learning, the attack model usually considers changes to the graph structure in addition to, or instead of, the numerical features of the input, and the state-of-the-art techniques in the area proceed via reduction to constraint solving, working on top of powerful solvers, e.g. for mixed integer programming. We show that it is possible to improve on the state of the art in structural robustness by replacing the use of powerful solvers with calls to efficient partial solvers, which run in polynomial time but may be incomplete. We evaluate our tool RobLight on a diverse set of GNN variants and datasets.
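The partial-solver idea can be illustrated with a sound-but-incomplete polynomial-time check. The sketch below uses interval bound propagation through a one-layer GCN under node-feature perturbations (the paper targets structural perturbations, so this only illustrates the general pattern of trying a cheap partial check before a complete solver); the function names and the assumption of a nonnegative normalized adjacency `A_hat` are mine, not the paper's:

```python
import numpy as np

def interval_matmul(Alo, Ahi, B):
    # Interval matrix [Alo, Ahi] times a fixed matrix B.
    Bp, Bn = np.maximum(B, 0), np.minimum(B, 0)
    return Alo @ Bp + Ahi @ Bn, Ahi @ Bp + Alo @ Bn

def gcn_partial_check(A_hat, X, W, eps, node, y):
    """Polynomial-time, sound-but-incomplete check for a one-layer GCN
    under node-feature perturbations of size eps (graph structure fixed).
    Returns True only when robustness is proven; False means 'unknown',
    at which point a complete solver (e.g. MILP) would take over."""
    # A_hat is assumed entrywise nonnegative (normalized adjacency),
    # so these are valid bounds on A_hat @ X' for any |X' - X| <= eps.
    lo, hi = A_hat @ (X - eps), A_hat @ (X + eps)
    lo, hi = interval_matmul(lo, hi, W)
    lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU
    return all(lo[node, y] > hi[node, j]
               for j in range(W.shape[1]) if j != y)
```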
- North America > United States > Texas (0.06)
- North America > United States > Wisconsin (0.06)
- Europe > Slovenia > Drava > Municipality of Benedikt > Benedikt (0.05)
- (3 more...)
- Information Technology > Security & Privacy (0.34)
- Government > Military (0.34)
Mini-Batch Robustness Verification of Deep Neural Networks
Tzour-Shaday, Saar, Drachsler-Cohen, Dana
Neural network image classifiers are ubiquitous in many safety-critical applications. However, they are susceptible to adversarial attacks. To understand their robustness to attacks, many local robustness verifiers have been proposed to analyze $ε$-balls of inputs. Yet, existing verifiers incur long analysis times or lose too much precision, making them less effective for large sets of inputs. In this work, we propose a new approach to local robustness: group local robustness verification. The key idea is to leverage the similarity of the network computations of certain $ε$-balls to reduce the overall analysis time. We propose BaVerLy, a sound and complete verifier that boosts the local robustness verification of a set of $ε$-balls by dynamically constructing and verifying mini-batches. BaVerLy adaptively identifies successful mini-batch sizes, accordingly constructs mini-batches of $ε$-balls that have similar network computations, and verifies them jointly. If a mini-batch is verified, all its $ε$-balls are proven robust. Otherwise, one $ε$-ball is suspected of not being robust, guiding the refinement. BaVerLy leverages the analysis results to expedite the analysis of that $ε$-ball as well as the analysis of the mini-batch with the other $ε$-balls. We evaluate BaVerLy on fully connected and convolutional networks for MNIST and CIFAR-10. Results show that BaVerLy speeds up the common one-by-one verification by 2.3x on average and up to 4.1x, in which case it reduces the total analysis time from 24 hours to 6 hours.
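The mini-batch idea can be sketched with interval bound propagation: join a batch of $ε$-balls into one bounding box, verify the box once, and fall back to per-ball analysis only when the joint check fails. This is a simplified illustration of the batching pattern, not BaVerLy's actual algorithm; all function names are hypothetical:

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    # Exact interval propagation through an affine layer W x + b.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def verify_box(lo, hi, layers, y):
    # Sound (incomplete) check: does class y stay argmax over the box?
    for W, b in layers[:-1]:
        lo, hi = interval_affine(lo, hi, W, b)
        lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU
    lo, hi = interval_affine(lo, hi, *layers[-1])
    return all(lo[y] > hi[j] for j in range(len(lo)) if j != y)

def verify_minibatch(centers, eps, layers, y):
    """Join the eps-balls into one bounding box; if that box verifies,
    every ball in the batch is proven robust in one analysis pass."""
    lo = np.min(centers, axis=0) - eps
    hi = np.max(centers, axis=0) + eps
    if verify_box(lo, hi, layers, y):
        return [True] * len(centers)
    # Fall back to one-by-one verification of the individual balls.
    return [verify_box(c - eps, c + eps, layers, y) for c in centers]
```

Joint verification pays off when the balls' activation patterns are similar, which is exactly the similarity criterion the abstract describes for forming mini-batches.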
- North America > Canada > Ontario > Toronto (0.14)
- North America > Canada > Quebec > Montreal (0.04)
- Europe > Switzerland (0.04)
- (8 more...)
- Information Technology > Security & Privacy (0.48)
- Information Technology > Robotics & Automation (0.46)
- Government > Military (0.34)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Asia > Middle East > Iran > Tehran Province > Tehran (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.96)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.70)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (0.68)
Reviews: Robustness Verification of Tree-based Models
Originality: The robustness verification method presented in the paper is new and interesting. The authors provide a fair list of related work and compare existing methods with their own in the experiment section. Quality: The paper provides a complete presentation of three verification methods: 1) verifying the robustness of a single decision tree, 2) verifying the robustness of a tree ensemble using existing algorithms for finding k-cliques, and 3) a fast, approximate method for estimating a lower bound on the robustness. The theoretical claims and their proofs make sense to me. Overall, the empirical evaluation is well designed and convincing.
Scalable and Explainable Verification of Image-based Neural Network Controllers for Autonomous Vehicles
Parameshwaran, Aditya, Wang, Yue
Existing formal verification methods for image-based neural network controllers in autonomous vehicles often struggle with high-dimensional inputs, computational inefficiency, and a lack of explainability. These challenges make it difficult to ensure safety and reliability, as processing high-dimensional image data is computationally intensive and neural networks are typically treated as black boxes. To address these issues, we propose \textbf{SEVIN} (Scalable and Explainable Verification of Image-Based Neural Network Controllers), a framework that leverages a Variational Autoencoder (VAE) to encode high-dimensional images into a lower-dimensional, explainable latent space. By annotating latent variables with corresponding control actions, we generate convex polytopes that serve as structured input spaces for verification, significantly reducing computational complexity and enhancing scalability. Integrating the VAE's decoder with the neural network controller allows for formal and robustness verification using these explainable polytopes. Our approach also incorporates robustness verification under real-world perturbations by augmenting the dataset and retraining the VAE to capture environmental variations. Experimental results demonstrate that SEVIN achieves efficient and scalable verification while providing explainable insights into controller behavior, bridging the gap between formal verification techniques and practical applications in safety-critical systems.
- North America > United States > California > Orange County > Irvine (0.05)
- North America > United States > South Carolina (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Africa > Middle East > Tunisia > Ben Arous Governorate > Ben Arous (0.04)
- Transportation (0.48)
- Information Technology (0.46)
Robustness Verification of Tree-based Models
We study the robustness verification problem of tree-based models, including random forests (RF) and gradient boosted decision trees (GBDT). Formal robustness verification of decision tree ensembles involves finding the exact minimal adversarial perturbation or a guaranteed lower bound on it. Existing approaches cast this verification problem as a mixed integer linear programming (MILP) problem, which finds the minimal adversarial distortion in exponential time and is thus impractical for large ensembles. Although this verification problem is NP-complete in general, we give a more precise complexity characterization. We show that there is a simple linear-time algorithm for verifying a single tree, and that for tree ensembles the verification problem can be cast as a max-clique problem on a multi-partite boxicity graph.
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (0.89)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Mathematical & Statistical Methods (0.79)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (0.61)
Reviews: Scaling provable adversarial defenses
Based on the rebuttal letter, I'd suggest emphasizing in the final version that the provable defense is guaranteed in a probabilistic sense. Even though I agree that the geometric estimator is not necessary at test time, what is actually certified is the training data, not the test data. This is a nice piece of work and I enjoyed reading it. In my opinion, this work makes important contributions to norm-bounded robustness verification by proposing a scalable and more generic toolkit for robustness certification. The autodual framework is both theoretically grounded and algorithmically efficient. However, I also have two major concerns about this work: (I) the proposed nonlinear random projection leads to an estimated (i.e., probabilistic) lower bound of the minimum distortion toward misclassification, which is a soft robustness certification and does not follow the mainstream definition of a deterministic lower bound; (II) since the method yields an estimated lower bound, it lacks a performance comparison with existing bound-estimation methods.
Automated Design of Linear Bounding Functions for Sigmoidal Nonlinearities in Neural Networks
König, Matthias, Zhang, Xiyue, Hoos, Holger H., Kwiatkowska, Marta, van Rijn, Jan N.
The ubiquity of deep learning algorithms in various applications has amplified the need for assuring their robustness against small input perturbations such as those occurring in adversarial attacks. Existing complete verification techniques offer provable guarantees for all robustness queries but struggle to scale beyond small neural networks. To overcome this computational intractability, incomplete verification methods often rely on convex relaxation to over-approximate the nonlinearities in neural networks. Progress in tighter approximations has been achieved for piecewise linear functions. However, robustness verification of neural networks with general activation functions (e.g., Sigmoid, Tanh) remains under-explored and poses new challenges. Typically, these networks are verified using convex relaxation techniques, which involve computing linear upper and lower bounds of the nonlinear activation functions. In this work, we propose a novel parameter search method to improve the quality of these linear approximations. Specifically, we show that using a simple search method, carefully adapted to the given verification problem through state-of-the-art algorithm configuration techniques, improves the global lower bound by 25% on average over the current state of the art on several commonly used local robustness verification benchmarks.
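On the concave part of the sigmoid (inputs ≥ 0), the chord is a sound lower bound and any tangent a sound upper bound, so "searching for linear bounds" reduces to choosing a good tangent point. The sketch below uses a plain grid search rather than the paper's algorithm-configuration approach; all function names are hypothetical:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dsig(z):
    s = sigmoid(z)
    return s * (1 - s)

def chord(l, u):
    # Line through (l, sig(l)) and (u, sig(u)).
    a = (sigmoid(u) - sigmoid(l)) / (u - l)
    return a, sigmoid(l) - a * l

def tangent(p):
    # Tangent line to the sigmoid at point p.
    a = dsig(p)
    return a, sigmoid(p) - a * p

def bounds_for_interval(l, u, grid=50):
    """Sound linear bounds for the sigmoid on [l, u], assuming 0 <= l < u
    (the sigmoid is concave there): the chord is a lower bound and any
    tangent an upper bound. Grid-search tangent points for tightness."""
    lower = chord(l, u)
    best, best_gap = None, float("inf")
    for i in range(grid + 1):
        p = l + (u - l) * i / grid
        a, b = tangent(p)
        # tangent - sigmoid is convex here, so its max is at an endpoint.
        gap = max(a * l + b - sigmoid(l), a * u + b - sigmoid(u))
        if gap < best_gap:
            best, best_gap = (a, b), gap
    return lower, best
```

The mixed convex/concave case (l < 0 < u) needs an extra case split, which this sketch omits.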
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- Europe > Netherlands > South Holland > Leiden (0.04)
- Europe > Germany (0.04)